Generalized Bregman Distances and Convergence Rates for Non-convex Regularization Methods
Author
Abstract
We generalize the notion of Bregman distance using concepts from abstract convexity in order to derive convergence rates for Tikhonov regularization with non-convex regularization terms. In particular, we study the non-convex regularization of linear operator equations on Hilbert spaces, showing that the conditions required for the application of the convergence rates results are strongly related to the standard range conditions from the convex case. Moreover, we consider the setting of sparse regularization, where we show that a rate of order δ holds if the regularization term has a slightly faster growth at zero than |t|^p.
AMS classification scheme numbers: 65J20, 47A52, 52A01
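To fix intuition for the object the paper generalizes: in the convex case, the Bregman distance of u from v induced by a convex functional J is D_J(u, v) = J(u) − J(v) − ⟨p, u − v⟩ with p a subgradient of J at v. A minimal numerical sketch, using the illustrative choice J(u) = ‖u‖², for which the Bregman distance reduces to ‖u − v‖²:

```python
import numpy as np

def bregman_distance(J, grad_J, u, v):
    """Bregman distance D_J(u, v) = J(u) - J(v) - <grad J(v), u - v>
    for a differentiable convex functional J (so the subgradient is the gradient)."""
    return J(u) - J(v) - np.dot(grad_J(v), u - v)

# Illustrative functional: J(u) = ||u||^2, with gradient 2v.
J = lambda x: np.dot(x, x)
grad_J = lambda x: 2.0 * x

u = np.array([1.0, 2.0])
v = np.array([0.0, 1.0])

# For this J the Bregman distance coincides with the squared Euclidean distance.
print(bregman_distance(J, grad_J, u, v))  # 2.0
print(np.dot(u - v, u - v))               # 2.0
```

The paper's contribution lies in replacing the subgradient inequality above with notions from abstract convexity so that D_J remains meaningful for non-convex J; the snippet only illustrates the classical starting point.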
Similar Resources
The residual method for regularizing ill-posed problems
Although the residual method, or constrained regularization, is frequently used in applications, a detailed study of its properties is still missing. This sharply contrasts with the progress of the theory of Tikhonov regularization, where a series of new results for regularization in Banach spaces has been published in recent years. The present paper intends to bridge the gap between the existin...
Convergence rates in constrained Tikhonov regularization: equivalence of projected source conditions and variational inequalities
In this paper, we highlight the role of variational inequalities for obtaining convergence rates in Tikhonov regularization of nonlinear ill-posed problems with convex penalty functionals under convexity constraints in Banach spaces. Variational inequalities are able to cover solution smoothness and the structure of nonlinearity in a uniform manner, not only for unconstrained but, as we indicat...
Morozov’s Discrepancy Principle for Tikhonov-type functionals with non-linear operators
In this paper we deal with Morozov’s discrepancy principle as an a-posteriori parameter choice rule for Tikhonov regularization with general convex penalty terms Ψ for non-linear inverse problems. It is shown that a regularization parameter α fulfilling the discrepancy principle exists whenever the operator F satisfies some basic conditions, and that for this parameter choice rule holds α → 0, ...
On Dual Convergence of the Generalized Proximal Point Method with Bregman Distances
The use of generalized distances (e.g., Bregman distances), instead of the Euclidean one, in the proximal point method for convex optimization allows for elimination of the inequality constraints from the subproblems. In this paper we consider the proximal point method with Bregman distances applied to linearly constrained convex optimization problems, and study the behavior of the dual seque...
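The constraint-elimination effect mentioned in this snippet can be seen in a toy example. A minimal sketch, with illustrative data: for the linear objective ⟨c, x⟩ over x ≥ 0, the proximal step with the entropy kernel h(x) = Σ xᵢ log xᵢ has the closed form x⁺ = x · exp(−λc), so the iterates stay positive automatically and the nonnegativity constraint never appears in the subproblem.

```python
import numpy as np

# Entropic (Bregman) proximal point step for  min <c, x>  s.t.  x >= 0.
# With h(x) = sum_i x_i log x_i, the step
#   x_{k+1} = argmin_{x} <c, x> + (1/lam) * D_h(x, x_k)
# has the closed form x_{k+1} = x_k * exp(-lam * c):
# positivity is preserved by the kernel itself, not by an explicit constraint.
c = np.array([1.0, 0.5, 2.0])   # illustrative cost vector (all positive,
                                # so the constrained minimizer is x = 0)
lam = 0.1
x = np.ones(3)
for _ in range(50):
    x = x * np.exp(-lam * c)

print(np.all(x > 0))   # iterates remain strictly positive
print(np.all(x < 1))   # and decrease toward the minimizer x = 0
```

This is only a sketch of the mechanism the abstract refers to; the cited paper studies the behavior of the associated dual sequences, which the toy example does not touch.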
Convex Regularization of Local Volatility Models from Option Prices: Convergence Analysis and Rates
We study a convex regularization of the local volatility surface identification problem for the Black-Scholes partial differential equation from prices of European call options. This is a highly nonlinear ill-posed problem which in practice is subject to different noise levels associated with bid-ask spreads and sampling errors. We analyze, in appropriate function spaces, different properties of ...